Local Feedback Multilayered Networks

Authors

  • Paolo Frasconi
  • Marco Gori
  • Giovanni Soda
Abstract

In this paper, we investigate the capabilities of Local Feedback Multi-Layered Networks, a particular class of recurrent networks, in which feedback connections are only allowed from neurons to themselves. In this class, learning can be accomplished by an algorithm which is local in both space and time. We describe the limits and properties of these networks and give some insights on their use for solving practical problems.
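To make the architecture concrete, below is a minimal sketch (not the authors' code) of a layer in which each hidden unit has a single self-feedback connection, so the sensitivity of its activation to that weight can be carried forward with a recursion that is local in both space and time. The class name, layer sizes, tanh activation, and variable names are illustrative assumptions, not taken from the paper.

import numpy as np

# Illustrative local-feedback layer: each hidden unit i has one self-recurrent
# weight w_ii, so d x_i / d w_ii can be stored and updated per unit, i.e. with
# a rule that is local in both space and time.
class LocalFeedbackLayer:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_hidden, n_in))  # feedforward weights
        self.w_self = rng.normal(0.0, 0.1, n_hidden)     # one self-loop weight per unit
        self.x = np.zeros(n_hidden)                      # activations at time t-1
        self.dx_dw = np.zeros(n_hidden)                  # d x_i(t) / d w_ii, kept locally

    def step(self, u):
        # Net input: feedforward term plus each unit's own previous activation.
        a = self.W @ u + self.w_self * self.x
        x_new = np.tanh(a)
        # Local-in-time recursion for the self-weight sensitivity:
        #   d x_i(t)/d w_ii = f'(a_i(t)) * (x_i(t-1) + w_ii * d x_i(t-1)/d w_ii)
        self.dx_dw = (1.0 - x_new ** 2) * (self.x + self.w_self * self.dx_dw)
        self.x = x_new
        return x_new

# Example: run a short random input sequence through a 3-input, 4-unit layer.
layer = LocalFeedbackLayer(n_in=3, n_hidden=4)
for u in np.random.default_rng(1).normal(size=(5, 3)):
    h = layer.step(u)

Because each stored sensitivity involves only that unit's own state and self-weight, gradients for the feedback connections can be accumulated online without unfolding the network in time; the feedforward weights can be trained with ordinary backpropagation through the static part of the network.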


Related Articles

The Feedback Based Mechanism for Video Streaming Over Multipath Ad Hoc Networks

Ad hoc networks are multi-hop wireless networks without a pre-installed infrastructure. Such networks are widely used in military applications and in emergency situations as they permit the establishment of a communication network at very short notice with a very low cost. Video is very sensitive to packet loss, and wireless ad-hoc networks are error prone due to node mobility and weak links. H...

On-line learning algorithms for locally recurrent neural networks

This paper focuses on on-line learning procedures for locally recurrent neural networks with emphasis on multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations which include generalized output and activation feedback multilayer networks (MLN's). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose on-line version, causal re...

Causal Back Propagation through Time for Locally Recurrent Neural Networks

This paper concerns dynamic neural networks for signal processing: architectural issues are considered but the paper focuses on learning algorithms that work on-line. Locally recurrent neural networks, namely MLP with IIR synapses and generalization of Local Feedback MultiLayered Networks (LF MLN), are compared to more traditional neural networks, i.e. static MLP with input and/or output buffer...


Towards a Mathematical Understanding of the Difficulty in Learning with Feedforward Neural Networks

Despite the recent success of deep neural networks in various applications, designing and training deep neural networks is still among the greatest challenges in the field. In this work, we address the challenge of designing and training feedforward Multilayer Perceptrons (MLPs) from a smooth optimisation perspective. By characterising the critical point conditions of an MLP based loss function...

Journal:
  • Neural Computation

Volume: 4  Issue: 

Pages: 

Publication year: 1992